Preface
The goal is to show how to do online resizing and configuration updates in an iSCSI (tgt) + librbd setup, where "online" means without interrupting client connections. This assumes you already have a working Ceph cluster.
1. Setting up the base environment
Create a 2 GB volume named img01 and configure the tgtd service.
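The image itself can be created with rbd beforehand; a minimal sketch, assuming the default rbd pool (matching the rbd/img01 path used in the config below):

rbd create img01 --size 2G    # create a 2 GB RBD image in the 'rbd' pool

The tgt configuration for the target: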
[root@ceph07 ~]# cat /etc/tgt/conf.d/my.conf
<target iqn.2019-09.com.example:target01>
    <backing-store rbd/img01>
        lun 1
        vendor_id iqn.2019-09.com.example
    </backing-store>
    bs-type rbd
    initiator-address 192.168.0.0/16
</target>
Start the tgtd service, then check the configuration.
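Depending on how tgt was installed, the service is started either through systemd or through the bundled init script, for example:

systemctl start tgtd      # if tgtd ships a systemd unit (assumed for a CentOS 7 host)
/etc/init.d/tgtd start    # or via the sysvinit script, which is also used for reloads later in this post

With the daemon running, tgt-admin shows the configured target: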
[root@ceph07 ~]# tgt-admin -s
Target 1: iqn.2019-09.com.example:target01
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 2147 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rbd
            Backing store path: rbd/img01
            Backing store flags:
    Account information:
    ACL information:
        192.168.0.0/16
On the client, discover and log in to this target:
[root@ceph05 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.10.32
192.168.10.32:3260,1 iqn.2019-09.com.example:target01
[root@ceph05 ~]#
[root@ceph05 ~]# iscsiadm -m node -T iqn.2019-09.com.example:target01 -p 192.168.10.32 -l
Logging in to [iface: default, target: iqn.2019-09.com.example:target01, portal: 192.168.10.32,3260] (multiple)
Login to [iface: default, target: iqn.2019-09.com.example:target01, portal: 192.168.10.32,3260] successful.
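The active session can be double-checked from the client (a quick sanity check using standard iscsiadm options):

iscsiadm -m session    # should list the session to 192.168.10.32:3260 for target01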
Check the block devices; sde is the target device:
[root@ceph05 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 20G 0 disk
└─ceph--694b6932--0f9c--4f98--a215--d7eedbab6c2d-osd--block--9b460e9d--4b20--4fce--86cf--8f8c6f9f5f7e 253:3 0 20G 0 lvm
sdc 8:32 0 20G 0 disk
└─ceph--d67d8496--5e1b--461b--a63e--e2e644ffbfea-osd--block--84314de0--abfd--457b--8b3f--0ec1191f0003 253:2 0 20G 0 lvm
sdd 8:48 0 20G 0 disk
sde 8:64 0 2G 0 disk
sr0 11:0 1 4.3G 0 rom
Format the device, mount it, and write some test data.
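A minimal sketch of these steps, assuming XFS (matching the filesystem type shown in the df output below) and the /mnt mount point:

mkfs.xfs /dev/sde                                   # create an XFS filesystem on the new iSCSI disk
mount /dev/sde /mnt                                 # mount at /mnt, as shown in the df output below
dd if=/dev/zero of=/mnt/testfile bs=1M count=100    # write ~100 MB of test data

Verify with df: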
[root@ceph05 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 17G 3.4G 14G 20% /
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 910M 0 910M 0% /dev/shm
tmpfs tmpfs 910M 9.5M 901M 2% /run
tmpfs tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 146M 869M 15% /boot
tmpfs tmpfs 910M 24K 910M 1% /var/lib/ceph/osd/ceph-0
tmpfs tmpfs 910M 24K 910M 1% /var/lib/ceph/osd/ceph-1
tmpfs tmpfs 182M 0 182M 0% /run/user/0
/dev/sde xfs 2.0G 105M 1.9G 6% /mnt
2. Resizing and updating the configuration
Resize img01 to 4 GB:
[root@ceph07 ~]# rbd resize img01 --size 4G
Resizing image: 100% complete...done.
[root@ceph07 ~]# rbd info img01
rbd image 'img01':
    size 4GiB in 1024 objects
    order 22 (4MiB objects)
    block_name_prefix: rbd_data.5e3a6b8b4567
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Fri Nov 15 11:00:39 2019
Then add another LUN to this target:
[root@ceph07 ~]# rbd create img03 --size 2G
[root@ceph07 ~]# cat /etc/tgt/conf.d/my.conf
<target iqn.2019-09.com.example:target01>
    <backing-store rbd/img01>
        lun 1
        vendor_id iqn.2019-09.com.example
    </backing-store>
    <backing-store rbd/img03>
        lun 2
        vendor_id iqn.2019-09.com.example
    </backing-store>
    bs-type rbd
    initiator-address 192.168.0.0/16
</target>
Update the target configuration:
[root@ceph07 ~]# /etc/init.d/tgtd forcedreload
Force-updating target framework daemon configuration
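On hosts without the sysvinit wrapper, the same reload can usually be done with tgt-admin directly; a sketch (option behavior may differ slightly between tgt versions):

tgt-admin --update ALL -f    # re-read the config and update all targets, forcing updates even for LUNs in use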
Check whether the update took effect:
[root@ceph07 ~]# tgt-admin -s
Target 1: iqn.2019-09.com.example:target01
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 14
            Initiator: iqn.2019-11.com.example:client001 alias: ceph05
            Connection: 0
                IP Address: 192.168.10.30
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 4295 MB, Block size: 512 <--- the capacity is now 4G; the resize has taken effect
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rbd
            Backing store path: rbd/img01
            Backing store flags:
        LUN: 2 <--- the newly added img03 is now exported
            Type: disk
            SCSI ID: IET 00010002
            SCSI SN: beaf12
            Size: 2147 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rbd
            Backing store path: rbd/img03
            Backing store flags:
    Account information:
    ACL information:
        192.168.0.0/16
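The target now exports a second LUN, but the client will only see a new /dev device after its iSCSI session is rescanned; a sketch using the standard iscsiadm rescan option:

iscsiadm -m session -R    # rescan all logged-in sessions so the new LUN appears as a new block device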
Now grow the resized device on the client side:
[root@ceph05 ~]# lsscsi
[0:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sda
[0:0:1:0] disk VMware, VMware Virtual S 1.0 /dev/sdb
[0:0:2:0] disk VMware, VMware Virtual S 1.0 /dev/sdc
[0:0:3:0] disk VMware, VMware Virtual S 1.0 /dev/sdd
[2:0:0:0] cd/dvd NECVMWar VMware IDE CDR10 1.00 /dev/sr0
[3:0:0:0] storage IET Controller 0001 -
[3:0:0:1] disk iqn.2019 VIRTUAL-DISK 0001 /dev/sde
[root@ceph05 ~]# echo "---" >/sys/class/scsi_device/3\:0\:0\:1/device/rescan
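Before growing the filesystem, it is worth confirming that the kernel now sees the new 4 GB size, for example:

lsblk /dev/sde    # the SIZE column should now show 4G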
[root@ceph05 ~]# xfs_growfs /dev/sde
meta-data=/dev/sde isize=512 agcount=4, agsize=131072 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 524288 to 1048576
The resize is complete:
[root@ceph05 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 17G 3.4G 14G 20% /
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 910M 0 910M 0% /dev/shm
tmpfs tmpfs 910M 9.5M 901M 2% /run
tmpfs tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 146M 869M 15% /boot
tmpfs tmpfs 910M 24K 910M 1% /var/lib/ceph/osd/ceph-0
tmpfs tmpfs 910M 24K 910M 1% /var/lib/ceph/osd/ceph-1
tmpfs tmpfs 182M 0 182M 0% /run/user/0
/dev/sde xfs 4.0G 105M 3.9G 3% /mnt
3. Summary
- After changing a LUN's size or any other target configuration, run the reload command /etc/init.d/tgtd forcedreload; this does not disconnect existing client connections.
- After resizing on the server side, the client has to take its own steps before it can use the new capacity:
  - echo "---" > /sys/class/scsi_device/{SCSI device id}/device/rescan
  - xfs_growfs /dev/sde (other filesystems use a different grow command; see the sketch below)
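For ext4, for example, the equivalent online grow step would be resize2fs instead of xfs_growfs (a sketch, assuming the filesystem on /dev/sde is already mounted):

resize2fs /dev/sde    # grow a mounted ext4 filesystem to fill the resized device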